

Search for: All records

Creators/Authors contains: "Greenstadt, Rachel"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Deepfakes have become a dual-use technology with applications in art, science, and industry. However, the technology can also be leveraged maliciously in areas such as disinformation, identity fraud, and harassment. In response to the technology's dangerous potential, many deepfake creation communities have been deplatformed, including the technology's originating community, r/deepfakes. MrDeepFakes (MDF) went online in February 2018, just eight days after the removal of r/deepfakes, as a privately owned platform intended to fill the role of community hub, and it has since grown into the largest dedicated deepfake creation and discussion platform currently online. This role as community hub sits alongside the site's other main purpose: hosting deepfake pornography depicting public figures, produced without their consent. In this paper we explore the two largest deepfake communities to have existed, using a mixed-methods approach combining quantitative and qualitative analysis. We seek to identify how these platforms were and are used by their members, what opinions these deepfakers hold about the technology and how society at large views it, and how the community views deepfakes-as-disinformation. We find a heavy emphasis on technical discussion on these platforms, intermixed with potentially malicious content. Additionally, we find that the deplatforming of deepfake communities early in the technology's life significantly impacted trust in alternative community platforms.
    Free, publicly-accessible full text available September 18, 2024
  2. This qualitative study examines the privacy challenges perceived by librarians who afford access to physical and electronic spaces and are in a unique position of safeguarding the privacy of their patrons. As internet “service providers,” librarians represent a bridge between the physical and internet world, and thus offer a unique sight line to the convergence of privacy, identity, and social disadvantage. Drawing on interviews with 16 librarians, we describe how they often interpret or define their own rules when it comes to privacy to protect patrons who face challenges that stem from structures of inequality outside their walls. We adopt the term “intersectional thinking” to describe how librarians reported thinking about privacy solutions, which is focused on identity and threats of structural discrimination (the rules, norms, and other determinants of discrimination embedded in institutions and other societal structures that present barriers to certain groups or individuals), and we examine the role that low/no-tech strategies play in ameliorating these threats. We then discuss how librarians act as privacy intermediaries for patrons, the potential analogue for this role for developers of systems, the power of low/no-tech strategies, and implications for design and research of privacy-enhancing technologies (PETs).

     
  3. Free, publicly-accessible full text available April 30, 2024
  4. Many online communities rely on postpublication moderation, where contributors, even those perceived as risky, are allowed to publish material immediately and moderation takes place after the fact. An alternative arrangement involves moderating content before publication. A range of communities have argued against prepublication moderation, suggesting that it makes contributing less enjoyable for new members and that it distracts established community members with extra moderation work. We present an empirical analysis of the effects of a prepublication moderation system called FlaggedRevs that was deployed by several Wikipedia language editions. Using panel data from 17 large Wikipedia editions, we tested a series of hypotheses about the system's effects on activity levels and contribution quality. We found that the system was very effective at keeping low-quality contributions from ever becoming visible. Although there is some evidence that the system discouraged participation among users without accounts, our analysis suggests that its effects on contribution volume and quality were moderate at most. Our findings imply that concerns about major negative effects of prepublication moderation systems on contribution quality and project productivity may be overstated.

     
  5. People-search websites aggregate and publicize users' Personally Identifiable Information (PII), previously sourced from data brokers. This paper presents a qualitative study of the perceptions and experiences of 18 participants who sought information removal, either by hiring a removal service or by requesting removal from the sites themselves. The users we interviewed were highly motivated and had sophisticated risk perceptions. We found that they encountered obstacles during the removal process, resulting in a high cost of removal whether they requested it themselves or hired a service. Participants perceived that the successful monetization of users' PII motivates data aggregators to make removal more difficult. Overall, self-management of privacy by attempting to keep information off the internet is difficult, and its success is hard to evaluate. We provide recommendations to users, third parties, removal services, and researchers aiming to improve the removal process.
  6. A growing body of research suggests that intimate partner abusers use digital technologies to surveil their partners, including by installing spyware apps, compromising devices and online accounts, and employing social engineering tactics. However, to date, this form of privacy violation, called intimate partner surveillance (IPS), has primarily been studied from the perspective of victim-survivors. We present a qualitative study of how potential perpetrators of IPS harness the emotive power of sharing personal narratives to validate and legitimise their abusive behaviours. We analysed 556 stories of IPS posted on publicly accessible online forums dedicated to the discussion of sexual infidelity. We found that many users share narrative posts describing IPS as they boast about their actions, advise others on how to perform IPS without detection, and seek suggestions for next steps. We identify a set of common thematic story structures, justifications for abuse, and outcomes within the stories that provide a window into how these individuals believe their behaviour is justified. Using these stories, we develop a four-stage framework that captures the change in a potential perpetrator's approach to IPS. We use our findings and framework to guide a discussion of efforts to combat abuse, including how to identify crucial moments where interventions might safely be applied to prevent or de-escalate IPS.
  7. Cappellato, Linda ; Eickhoff, Carsten ; Ferro, Nicola ; Névéol, Aurélie (Ed.)
    This paper describes the approach we took to create a machine learning model for the PAN 2020 Authorship Verification Task. For each document pair, we extracted stylometric features from both documents and used the absolute difference between the two feature vectors as input to our classifier. We created two models: a logistic regression model trained on the small dataset, and a neural-network-based model trained on the large dataset. These models achieved AUCs of 0.939 and 0.953 on the small and large datasets, respectively, making them the second-best submissions on both datasets in the shared task.
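    The pair-difference approach described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the particular feature set (average word length, type-token ratio, punctuation rate, and a handful of function-word frequencies) and the hand-rolled gradient-descent logistic regression are assumptions chosen to keep the example self-contained; the actual submission's stylometric features and training setup are not detailed here.

    ```python
    import math
    import re
    from collections import Counter

    # A small, illustrative set of English function words (the real feature
    # set in stylometry work is typically much larger).
    FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "was"]

    def stylometric_features(text):
        """Extract a toy stylometric feature vector from one document."""
        words = re.findall(r"[a-z']+", text.lower())
        n = max(len(words), 1)
        avg_word_len = sum(len(w) for w in words) / n
        type_token_ratio = len(set(words)) / n
        punct_rate = sum(text.count(c) for c in ",.;:!?") / max(len(text), 1)
        counts = Counter(words)
        fw_freqs = [counts[w] / n for w in FUNCTION_WORDS]
        return [avg_word_len, type_token_ratio, punct_rate] + fw_freqs

    def pair_features(doc_a, doc_b):
        """Absolute difference of the two documents' feature vectors,
        the per-pair input representation described in the abstract."""
        fa, fb = stylometric_features(doc_a), stylometric_features(doc_b)
        return [abs(x - y) for x, y in zip(fa, fb)]

    def train_logreg(X, y, lr=0.5, epochs=200):
        """Plain stochastic-gradient-descent logistic regression,
        standing in for the paper's small-dataset classifier."""
        w, b = [0.0] * len(X[0]), 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                z = sum(wj * xj for wj, xj in zip(w, xi)) + b
                p = 1.0 / (1.0 + math.exp(-z))
                g = p - yi  # gradient of log loss w.r.t. z
                w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
                b -= lr * g
        return w, b

    def predict(w, b, x):
        """Probability that the pair is same-author (label 1)."""
        z = sum(wj * xj for wj, xj in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z))
    ```

    The key design choice is that the classifier never sees the raw documents, only the element-wise distance between their style profiles, so small differences should map to "same author" (label 1) and large differences to "different authors" (label 0).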